Running a Prompt

Once all configurations are set, users can run the selected prompt and monitor its output in real time. This interactive loop provides quick feedback, letting users refine both their input and the run settings until the output meets their needs.

Example: Running an Audio Diarization Prompt

For example, when running the Audio Diarization prompt, users can input data such as an audio transcription, and the AI model will process it and return a detailed response.

  • User Input: Users provide input data, such as text, code, or audio transcriptions. For example, in the case of Audio Diarization, the input could be an audio file or a transcript.
  • Model Output: The model processes the input and generates a response. In the Audio Diarization example, the model might return a transcription with timestamps and speaker separation, helping to identify different speakers and the timing of their speech.

This interaction follows a dialogue format, clearly marking User inputs and Model outputs, making it easy to track and understand the conversation flow.
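The alternating User/Model format above can be sketched as a list of role-tagged turns. This is a minimal illustration: the turn structure and the diarized reply shown here are assumptions for the example, not the tool's actual data format or real model output.

```python
def add_turn(conversation, role, text):
    """Append a turn, enforcing the alternating User/Model dialogue format."""
    if role not in ("user", "model"):
        raise ValueError(f"unknown role: {role}")
    if conversation and conversation[-1]["role"] == role:
        raise ValueError("turns must alternate between user and model")
    conversation.append({"role": role, "text": text})
    return conversation

# A hypothetical Audio Diarization exchange: the user supplies a transcript,
# and the model returns it with timestamps and speaker separation.
conversation = []
add_turn(conversation, "user",
         "Diarize: 'Hello, how are you? I'm fine, thanks.'")
add_turn(conversation, "model",
         "[00:00] Speaker 1: Hello, how are you?\n"
         "[00:03] Speaker 2: I'm fine, thanks.")
```

Keeping every exchange in this shape makes it straightforward to track the conversation flow and to save or replay it later.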


Adjusting Parameters and Re-Running the Prompt

Once the initial run is complete, users can adjust the prompt's parameters to refine the output. These adjustments can be made on the right-hand side of the screen, where the Run Settings are displayed.

Key Adjustable Parameters:

  • Token Count: Control the maximum number of tokens in the response. This setting caps the length of the output, preventing overly long responses.
  • Temperature: Set how creative or deterministic the AI's response should be. A higher value (e.g., 0.8) results in more creative, varied outputs, while a lower value (e.g., 0.2) makes the output more focused and deterministic.
  • JSON Mode: Enable or disable JSON mode to constrain the model's output to valid JSON. This is particularly useful when working with APIs or other systems that require structured input and output.

After modifying these parameters, users can click Run to execute the prompt again and see how the output changes based on the new settings. This iterative process allows users to fine-tune their results until they meet their specific needs.
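The Run Settings described above can be pictured as a small configuration object. The field names below (temperature, max_output_tokens, response_mime_type) follow conventions common in generative-AI APIs but are assumptions for this sketch, not a documented interface:

```python
def build_run_settings(temperature=0.7, max_output_tokens=1024, json_mode=False):
    """Validate the adjustable parameters and assemble them into one config."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    if max_output_tokens <= 0:
        raise ValueError("max_output_tokens must be positive")
    config = {
        "temperature": temperature,              # higher = more varied output
        "max_output_tokens": max_output_tokens,  # caps response length
    }
    if json_mode:
        # JSON mode: ask for structured output instead of free-form text.
        config["response_mime_type"] = "application/json"
    return config

# A focused, deterministic configuration suited to structured extraction:
settings = build_run_settings(temperature=0.2, max_output_tokens=256, json_mode=True)
```

Re-running the prompt with a different config (say, temperature 0.8 and JSON mode off) and comparing the two outputs is the iterative process the section describes.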


Saving the Conversation

After running the prompt and reviewing the output, users have the option to save a copy of the conversation. This includes both the input and output, and it can be useful for tracking changes, analyzing results, or sharing findings with team members.

Saving Options:

  • Save a Copy: Save the entire conversation for later reference. This feature enables users to keep a record of the prompt's behavior for future use or analysis.
  • Get Code: Users can click Get Code to generate a code snippet for the current prompt configuration. This snippet can be used to reproduce or reuse the prompt setup in future sessions, saving time and ensuring consistency.

By saving the outputs, users can build a library of reusable prompts that can be fine-tuned or adapted for different tasks and scenarios.
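One way to picture "Save a Copy" is serializing the conversation together with its run settings, so the whole setup can be reloaded later. The record layout here is an assumption for illustration, not the tool's actual save format:

```python
import json

def save_conversation(path, conversation, settings):
    """Write the full exchange plus its run settings to a JSON file."""
    record = {"settings": settings, "conversation": conversation}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

def load_conversation(path):
    """Reload a saved record for reuse, comparison, or sharing."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Saving both halves of the record matters: the settings explain *why* the output looks the way it does, which is what makes a saved run reusable as a library entry.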


Best Practices for Running Prompts

To optimize the results of running AI prompts, follow these best practices:

  • Test Multiple Parameters: Experiment with different values for parameters like Temperature and Token Count to find the best configuration for your specific task.
  • Iterate Frequently: Don’t hesitate to re-run the prompt with slight adjustments. This iterative approach helps you refine the output and achieve more accurate results.
  • Use Descriptive Inputs: Providing clear, specific, and well-defined input will guide the AI model toward generating more accurate and relevant responses.
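The "Test Multiple Parameters" practice can be sketched as a simple sweep: run the same prompt at several temperature values and collect the outputs for side-by-side comparison. Here `run_prompt` is a stand-in stub, not a real model call:

```python
def run_prompt(prompt, temperature):
    """Stub standing in for an actual model invocation."""
    return f"[t={temperature}] response to: {prompt}"

def sweep_temperatures(prompt, temperatures):
    """Run one prompt at several temperatures and collect the results."""
    return {t: run_prompt(prompt, t) for t in temperatures}

results = sweep_temperatures("Summarize the meeting transcript.", [0.2, 0.5, 0.8])
```

Comparing the entries in `results` makes it easy to see where the output shifts from focused to creative, which is the quickest way to pick a configuration for a given task.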

By applying these best practices, users can maximize the potential of the Prompt Gallery and harness the full power of AI-driven automation and analysis.